Outcome or state probability can depend only on the current outcome or state, not on the earlier sequence {Markov process}. Events can depend on the immediately previous event. Transition graphs show the allowed transitions, and so the event orders needed to reach all events.
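As a minimal sketch of the idea, the Python snippet below simulates a three-state Markov chain from a hypothetical transition matrix P: each next state is drawn using only the current state's row, never the earlier history.

```python
# Minimal sketch: simulate a Markov chain from an assumed transition
# matrix P (each row sums to 1); the next state depends only on the
# current state.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.80, 0.15, 0.05],
              [0.20, 0.70, 0.10],
              [0.10, 0.30, 0.60]])

def simulate_chain(P, n_steps, start=0):
    states = [start]
    for _ in range(n_steps - 1):
        states.append(rng.choice(len(P), p=P[states[-1]]))
    return np.array(states)

path = simulate_chain(P, 1000)
```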
Taking samples can estimate transition matrices, and transition matrices can generate data points {Markov chain Monte Carlo method} (MCMC): the method constructs a Markov chain whose stationary distribution is the target distribution, so running the chain yields approximate samples from the target. MCMC includes jump MCMC, reversible-jump MCMC, and birth-death sampling.
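For the estimation direction, a sketch that recovers a transition-matrix estimate from a sampled state sequence by counting transitions and normalizing rows (it reuses `path` from the snippet above):

```python
# Sketch: estimate a transition matrix from samples by counting
# observed transitions and normalizing each row.
import numpy as np

def estimate_transition_matrix(states, n_states):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1.0, rows)  # guard empty rows

P_hat = estimate_transition_matrix(path, 3)
```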
Hidden Markov models {Markov-modulated Poisson process} can have continuous time: a hidden continuous-time Markov chain switches the rate of an observed Poisson process.
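A sketch under assumed switching and event rates: a two-state chain dwells for exponential times, and events arrive as a Poisson process at the current state's rate.

```python
# Sketch of a Markov-modulated Poisson process with illustrative rates:
# q[i] is the rate of leaving state i; lam[i] is the event rate in state i.
import numpy as np

rng = np.random.default_rng(1)
q = np.array([0.1, 0.3])
lam = np.array([1.0, 5.0])

def simulate_mmpp(t_end):
    t, state, events = 0.0, 0, []
    while t < t_end:
        dwell = min(rng.exponential(1.0 / q[state]), t_end - t)
        n = rng.poisson(lam[state] * dwell)            # events in this dwell
        events.extend(t + rng.uniform(0.0, dwell, n))  # uniformly placed
        t += dwell
        state = 1 - state                              # two states: toggle
    return np.sort(np.array(events))

arrival_times = simulate_mmpp(100.0)
```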
Functions {autocorrelation function} (ACF) measure the correlation between a process and its lagged values; autocorrelation that decays slowly or shows structure at long lags can indicate that the process description needs extra dimensions or parameters.
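A sketch of the empirical ACF in plain numpy (for example, applied to the simulated state sequence above):

```python
# Sketch: empirical autocorrelation at lags 0..max_lag.
import numpy as np

def acf(x, max_lag):
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var
                     for k in range(max_lag + 1)])

# acf(path, 20)  # lag-0 value is 1 by construction
```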
Time-series models {autoregressive integrated moving average} (ARIMA) combine autoregression, differencing, and moving averages; in Markov-switching variants, the series varies around means set by a hidden Markov chain.
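A sketch of the switching idea with illustrative parameters: the observed series is a state-dependent mean plus autoregressive noise, and a hidden two-state Markov chain picks the mean.

```python
# Sketch: a series varying around means chosen by a hidden Markov chain.
# Transition matrix P, means mu, and AR(1) coefficient phi are assumptions.
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])
mu = np.array([0.0, 3.0])
phi = 0.5

s, e, y = 0, 0.0, []
for _ in range(500):
    s = rng.choice(2, p=P[s])        # hidden regime
    e = phi * e + rng.normal()       # AR(1) deviation around the mean
    y.append(mu[s] + e)
```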
Minus two times the Schwarz criterion {Bayesian information criterion} (BIC) penalizes model complexity; minimizing BIC over candidate models estimates the hidden-Markov-model number of states (see the sketch under the Schwarz criterion entry below).
Models {Bayesian model} can estimate finite-state Markov chains.
Methods {direct Gibbs sampler, sampling} can sample the hidden states of a Markov chain one at a time, each conditioned on its two neighboring states and its observation, as sketched below.
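A sketch of one direct-Gibbs update, assuming Gaussian emissions with state means `mu` and common standard deviation `sigma` (all illustrative):

```python
# Sketch: resample hidden state s[t] given its neighbors and observation
# y[t]; p(s_t = j | ...) is proportional to
# A[s_{t-1}, j] * A[j, s_{t+1}] * N(y_t | mu_j, sigma).
import numpy as np

rng = np.random.default_rng(3)

def gibbs_update_state(t, s, y, A, mu, sigma):
    w = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma  # emission term
    if t > 0:
        w = w * A[s[t - 1], :]       # link from the previous state
    if t < len(s) - 1:
        w = w * A[:, s[t + 1]]       # link to the next state
    return rng.choice(len(w), p=w / w.sum())
```

Sweeping this update over all times repeatedly gives the sampler; it can mix more slowly than forward-backward sampling because states are updated singly.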
Algorithms {EM algorithm} alternate an E step, which computes expected hidden-variable statistics under the current parameters, and an M step, which re-estimates the parameters from those statistics; iterating increases the likelihood.
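A sketch of both steps for a two-component one-dimensional Gaussian mixture (initialization and iteration count are arbitrary choices; the shared normalizing constant cancels in the E step):

```python
# Sketch of EM for a 2-component Gaussian mixture.
import numpy as np

def em_gmm(y, n_iter=50):
    y = np.asarray(y, dtype=float)
    w = np.array([0.5, 0.5])                  # mixture weights
    mu = np.array([y.min(), y.max()])         # crude initial means
    sig = np.array([y.std(), y.std()])
    for _ in range(n_iter):
        # E step: responsibility of each component for each point
        dens = w * np.exp(-0.5 * ((y[:, None] - mu) / sig) ** 2) / sig
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: maximize expected complete-data log-likelihood
        nk = r.sum(axis=0)
        w = nk / len(y)
        mu = (r * y[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (y[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig
```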
Hidden Markov models with equal transition-matrix rows {finite mixture model} have independent, identically distributed states, so they reduce to finite mixtures.
Samplers {forward-backward Gibbs sampler, sampling} combine a forward filtering recursion with a backward recursion, stochastic (sampling states) or non-stochastic (smoothing), to draw the hidden-state sequence from its joint posterior distribution.
Recursions {forward-backward recursion, sampling} compute the distribution over hidden states and adjust the sampling of that distribution; the forward and backward passes parallel the Kalman-filter prediction and smoothing steps of a state-space model. A sketch follows.
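A sketch of the scaled recursion: the forward pass filters, the backward pass smooths, and the scaling factors accumulate the log-likelihood (the likelihood recursion noted below). Replacing the backward smoothing pass with backward sampling gives the forward-backward Gibbs sampler.

```python
# Sketch of forward-backward with per-step scaling.
import numpy as np

def forward_backward(pi, A, B):
    """pi: initial distribution (K,); A: transition matrix (K, K);
    B: emission likelihoods per time step (T, K).
    Returns smoothed posteriors gamma (T, K) and the log-likelihood."""
    T, K = B.shape
    alpha = np.zeros((T, K))
    c = np.zeros(T)                              # scaling factors
    alpha[0] = pi * B[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):                        # forward (filtering)
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]
    beta = np.ones((T, K))
    for t in range(T - 2, -1, -1):               # backward (smoothing)
        beta[t] = (A @ (B[t + 1] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta                         # posterior state marginals
    return gamma, np.log(c).sum()                # log-likelihood recursion
```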
A finite-state Markov chain whose states emit observations from state-dependent distributions {hidden Markov model} (HMM) generates the sampled data; the chain itself is not observed.
model
Hidden Markov models are graphical models; the hidden states form a finite-state Markov chain, and Bayesian treatments place priors on its parameters.
purposes
Hidden Markov chains model signal processing, biology, genetics, ecology, image analysis, economics, and network security. Typical applications compare a normal regime's distribution (error-free transmission, non-criminal activity) with an anomalous regime's distribution (errors, intrusions).
transition
A hidden Markov chain has an initial state distribution and a time-constant (homogeneous) transition matrix.
calculations
Calculations include estimating parameters by recursion {forward-backward recursion, Markov} {forward-backward Gibbs sampler, Markov} {direct Gibbs sampler, Markov}, filling in missing data, finding state-space size, preventing label switching, assessing validity, and testing convergence by likelihood recursion.
Models {hidden semi-Markov model} can hold a state for a random dwell time drawn from an explicit duration distribution, rather than the geometric dwell times implied by an ordinary hidden Markov chain.
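A sketch of explicit dwell times with assumed mean durations (Poisson-plus-one durations as an illustrative choice):

```python
# Sketch: semi-Markov state sequence with explicit dwell times; an
# ordinary Markov chain would give geometric dwell times instead.
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.0, 1.0],          # transitions between distinct states
              [1.0, 0.0]])
mean_dwell = np.array([5.0, 20.0])

def simulate_hsmm_states(n_steps, start=0):
    s, states = start, []
    while len(states) < n_steps:
        d = 1 + rng.poisson(mean_dwell[s] - 1.0)   # explicit duration
        states.extend([s] * d)
        s = rng.choice(2, p=A[s])
    return np.array(states[:n_steps])
```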
In Monte Carlo simulations, if smaller particles surround a particle, where will the particle be at a later time? The answer uses white noise {Langevin equation} {Langevin diffusion}: with friction coefficient gamma = lambda / m, Boltzmann constant k_B, and absolute temperature T, the long-time mean squared displacement in three dimensions is <x(t)^2> = 6 * (k_B * T / (m * gamma)) * t = 6 * D * t, so the diffusion coefficient is D = k_B * T / (m * gamma) (the Einstein relation).
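A sketch checking the long-time relation numerically in the overdamped limit, with D set to 1 in arbitrary units:

```python
# Sketch: overdamped Langevin (Brownian) dynamics; each coordinate takes
# Gaussian steps of variance 2*D*dt, so the 3-D mean squared
# displacement should grow as 6*D*t.
import numpy as np

rng = np.random.default_rng(5)
D, dt, n_steps, n_particles = 1.0, 0.01, 1000, 2000

steps = rng.normal(0.0, np.sqrt(2 * D * dt), (n_steps, n_particles, 3))
x = steps.cumsum(axis=0)                       # trajectories
msd = (x ** 2).sum(axis=2).mean(axis=1)        # mean squared displacement
t = dt * np.arange(1, n_steps + 1)
# msd / (6 * D * t) should hover near 1
```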
Recursion {likelihood recursion} can calculate the joint likelihood of the observations over time, typically on a logarithmic scale for numerical stability.
Methods {marginal estimation} can estimate, for each time separately, the most probable hidden state from its marginal posterior probability.
Methods {maximum a posteriori estimation} (MAP) can estimate the jointly most probable hidden-state sequence and its probability.
Methods {Metropolis-Hastings sampler} can propose each new point from a multivariate normal distribution centered on the current data point and accept or reject it with the Hastings probability.
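A sketch with a symmetric Gaussian proposal, for which the Hastings acceptance probability reduces to the ratio of target densities; `log_target` and the step size are placeholders:

```python
# Sketch of Metropolis-Hastings with a multivariate-normal proposal
# centered on the current point.
import numpy as np

rng = np.random.default_rng(6)

def metropolis_hastings(log_target, x0, n_samples, step=0.5):
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.normal(0.0, step, size=x.shape)
        # accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x.copy())
    return np.array(samples)

# e.g., a standard 2-D Gaussian target:
draws = metropolis_hastings(lambda z: -0.5 * z @ z, np.zeros(2), 5000)
```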
Distributions {predictive distribution} can measure how well models predict actual data.
Asymptotic approximations {Schwarz criterion} to the logarithm of the state-number posterior probability depend on the likelihood maximized over the transition matrix and other parameters, the sample size, and the number of free parameters.
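A sketch of the resulting comparison, BIC = -2 * log L + p * log(n); the free-parameter count shown assumes a K-state hidden Markov model with two emission parameters per state:

```python
# Sketch: choose the HMM state count by minimizing BIC.
import numpy as np

def bic(log_likelihood, n_free_params, n_obs):
    return -2.0 * log_likelihood + n_free_params * np.log(n_obs)

def hmm_free_params(K):
    # K*(K-1) transition entries, K-1 initial probabilities,
    # 2*K emission parameters (assumed Gaussian emissions)
    return K * (K - 1) + (K - 1) + 2 * K

# pick the K minimizing bic(loglik[K], hmm_free_params(K), n)
```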
Models {state-space model} generalize hidden Markov models to continuous state spaces, typically using Gaussian distributions for the state and observation noise.
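A sketch of the linear-Gaussian case: a one-dimensional Kalman filter whose predict and update steps mirror the hidden-Markov forward recursion; the model constants are illustrative.

```python
# Sketch: 1-D Kalman filter for x_t = a*x_{t-1} + w, y_t = h*x_t + v,
# with w ~ N(0, q) and v ~ N(0, r).
import numpy as np

def kalman_filter(y, a=0.9, q=1.0, h=1.0, r=0.5, m0=0.0, p0=1.0):
    m, p = m0, p0
    means = []
    for obs in y:
        m, p = a * m, a * a * p + q                      # predict
        k = p * h / (h * h * p + r)                      # Kalman gain
        m, p = m + k * (obs - h * m), (1.0 - k * h) * p  # update
        means.append(m)
    return np.array(means)
```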
Algorithms {Viterbi algorithm} can find the most likely (maximum a posteriori) hidden-state trajectory, using the forward-backward recursion with maximization rather than averaging, and can so estimate the hidden Markov state sequence and its probability.
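A sketch in log space: the forward pass of forward-backward with the sum replaced by a maximum, plus back-pointers to recover the path:

```python
# Sketch of the Viterbi algorithm.
import numpy as np

def viterbi(pi, A, B):
    """pi: initial distribution (K,); A: transitions (K, K);
    B: emission likelihoods (T, K). Returns the most likely state path."""
    T, K = B.shape
    logA = np.log(A)
    delta = np.log(pi) + np.log(B[0])
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA             # score[from, to]
        back[t] = scores.argmax(axis=0)            # best predecessor
        delta = scores.max(axis=0) + np.log(B[t])  # max instead of sum
    path = np.zeros(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):                 # follow back-pointers
        path[t] = back[t + 1, path[t + 1]]
    return path
```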